Determining Test Levels
Considerations for test level classification:
- Development method used/selected - Various development methods each have their own testing requirements. It may therefore be a consideration simply to adopt the test levels prescribed by such a method.
- Organisation structure of the departments - If an organisation has a functional department structure (production, user, management departments, etc), you might consider letting each relevant department plan its tests independently of the other departments. This occurs, for instance, with the test levels production acceptance test, user acceptance test and/or functional acceptance test.
- Separate responsibilities - A delivering party has different (test) responsibilities from the accepting party. Further divisions are often logical within these two parties. If, for instance, a production department is held responsible for the stable operation of a system, you might consider instituting the test level production acceptance test. This can be even more specific if the department is also responsible for, say, a particular performance level of the system; in that case, the test type performance test might be organised as a separate test level. When components or services from third parties are used, for instance in service-oriented architectures, it is also recommended to organise acceptance of these components or services as a separate test level.
- Various stakeholders - If a stakeholder wants to be certain that his wishes/requirements are complied with, a separate test level can be defined to this end. Consider, for instance, a package implementation where the user is not convinced that the package contains all of the required functionality. You might then define the test level functional (acceptance) test.
- Risk level - In the event of high risks, a separate test level might be created to ensure adequate focus on risk testing. For example, if a system has many (high-risk) interfaces, the test level system integration test may be a good addition. Likewise, when security or performance constitutes a great risk, you might consider organising these test types as separate test levels (security test, performance test).
- Test and development maturity - If testing and development are insufficiently mature, it is often wise to refrain from combining or integrating test levels. In such a situation, the test levels system test (executed by the supplying party) and functional test (executed by the accepting party) are often defined separately. However, if maturity is high, these tests might be combined into a so-called combined test (CT), with the aims of both parties being maintained but the organisation and execution being combined.
- Contractual agreements - In a demand-supply situation, a supplying party must comply with contractual agreements. This may mean that the party must demonstrate, in a system test, that specific requirements have been met, that an Internet application cannot be hacked by means of a security test, or that a performance test demonstrates that the package has an acceptable response time when using production data (a minimal sketch of such a response-time check follows this list). Especially when certification is involved (for smart cards, for security of web applications, for compliance schemes like SOX), we recommend creating a separate test level for these issues.
- Availability of infrastructure - Sometimes one is forced to review a certain test level classification because some test infrastructures simply have limited accessibility. For instance, a separate system integration test may be defined because an external party will make its test environment available only on a limited basis during a certain period for executing tests (a ‘test window’).
- Integration level - Generally speaking, as the complexity of a system (and its environment) increases, you might consider splitting the tests into multiple test levels to make and keep everything manageable. Furthermore, it may be impossible to test some quality characteristics until the system has reached a certain level of integration. An example of this is performance: you would prefer to measure it as early as possible, but often it cannot be measured realistically until the system is more or less complete. This might be a consideration for defining a separate test level for this purpose.
- Purchase or acceptance - To distinguish between purchase and acceptance, one could say that purchase is mainly about whether the defined requirements have been met, together with the obligation to pay (cf. the consideration contractual agreements). Acceptance, however, is mainly about the question: ‘Can the organisation work with this?’ Here, requirements that have not been described are also involved.
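To make the contractual response-time check mentioned above concrete, the following is a minimal sketch in Python of a performance test that fails when the 95th-percentile response time exceeds an agreed limit. It is a sketch under stated assumptions only: the endpoint URL, the 2-second limit and the sample size are hypothetical stand-ins for values that would in practice come from the contract and from representative production data.

import statistics
import time
import urllib.request

# Hypothetical values: in practice these come from the contractual
# agreements and from representative production data.
ENDPOINT = "https://test.example.com/orders"  # assumed test-environment URL
MAX_RESPONSE_SECONDS = 2.0                    # assumed contractual limit
SAMPLE_SIZE = 50

def measure_response_times(url, samples):
    """Time a series of identical requests against the system under test."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    timings = measure_response_times(ENDPOINT, SAMPLE_SIZE)
    p95 = statistics.quantiles(timings, n=20)[-1]  # 95th percentile
    print(f"95th percentile response time: {p95:.3f}s")
    # The contractual requirement is met only if the agreed limit holds.
    assert p95 <= MAX_RESPONSE_SECONDS, "contractual response-time limit exceeded"

Such a check would typically be run during the agreed test window, so that its outcome can be reported as evidence that the contractual requirement has been met.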
The following deviations occur regularly in practice:
- End-to-end test - An end-to-end test requires alignment between various systems, parties, infrastructures, etc. In many organisations this is so complex that the test is organised as a separate test level - system integration test - with its own test manager and testers.
- Combined test - The system test and the (functional) acceptance test usually have the same aim in terms of content, i.e. testing whether the system is functionally correct. The difference is the responsible party: supplier or client. For the sake of efficiency, these test levels can be combined into one single combined test, on the condition that good agreements have been reached about tasks and responsibilities, management, and verification of correct execution.
- Security test, performance test, usability test - If a test type requires a highly specific test environment or expertise, it may be useful to organise it as a separate test level with its own plan and organisation. Such tests are often outsourced for this reason.
- Proof-of-concept test - In cases of high uncertainty, i.e. when it is extremely difficult to assess the risks correctly, a so-called proof-of-concept test is sometimes set up. This test level often appears in large-scale migrations. The proof-of-concept test verifies whether the proposed project and/or development approach will work, by building and testing a small (about 10%) but representative part of the system at an early stage. The results and experiences provide more certainty about the total process and help to optimise it.
Determining Thoroughness of Testing
As described earlier, evaluation can also fall within the scope of the master test plan and is then covered by the activity of establishing the strategy. Only documentation that is (formally) delivered can be evaluated. Such documentation may include: designs at all levels (requirements, use cases, functional design, process design, technical design, AO procedures), meeting reports, results of brainstorming sessions, etc.
Overlap in tests
An important aim of allocating characteristics/object parts to test levels is to ensure that tests are not unintentionally executed in duplicate or forgotten. Clearly, in some situations it can be decided to have multiple parties execute similar tests. This is useful, for instance, if a certain test aspect can be investigated from different perspectives. The characteristic security, for example, can be included in both the UAT and the PAT: the UAT tests the authorisations, while the PAT looks at the technical functioning of the firewalls (see the sketch below).
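To make this division concrete, the following is a minimal sketch in Python of the UAT side of the overlap: checking that each user role can perform exactly the actions it is authorised for. The roles, actions and the is_allowed stand-in are hypothetical; in a real user acceptance test, is_allowed would call into the system under test rather than a local table.

# Hypothetical authorisation matrix: which role may perform which action.
EXPECTED_RIGHTS = {
    ("clerk", "view_order"): True,
    ("clerk", "approve_order"): False,      # clerks may not approve orders
    ("manager", "view_order"): True,
    ("manager", "approve_order"): True,
    ("manager", "configure_system"): False,
    ("admin", "configure_system"): True,
}

def is_allowed(role, action):
    """Stand-in for a real call into the system under test."""
    return EXPECTED_RIGHTS.get((role, action), False)

def test_authorisations():
    """UAT-style check: the system grants exactly the expected rights."""
    for (role, action), expected in EXPECTED_RIGHTS.items():
        actual = is_allowed(role, action)
        assert actual == expected, f"{role}/{action}: expected {expected}, got {actual}"

if __name__ == "__main__":
    test_authorisations()
    print("all authorisation checks passed")

The PAT counterpart of this test would not look at roles at all; it would probe the firewalls instead, for instance by verifying that only the agreed ports accept connections.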
Increments
In iterative or agile system development, developers work with increments: a sort of intermediate release, where each increment contains more functionality than the last. The test manager can take this into account when determining the strategy by creating one overall master test plan strategy with the general testing approach, after which the separate test levels determine the strategy for each increment. Alternatively, the test manager determines a small (master test plan) strategy for each increment instead of one big one.